101.
Based on the multiphase field concept and integrating the idea of a vector-valued phase field, a phase field model for the typical allotropic transformation of a solid solution is proposed. The model properly accounts for the non-uniform distribution of parent-phase grain boundaries and for crystal orientation, as illustrated by a simulation of the austenite-to-ferrite transformation in low-carbon steel. Misorientation-dependent grain boundary mobility is found to exert a strong influence on the resulting ferrite morphology, whereas misorientation-dependent grain boundary energy has only a weak effect. The evolution of the various types of grain boundaries is quantitatively characterized in terms of their respective grain boundary energy dissipation. The simulated ferrite fraction agrees well with the value expected from the phase diagram, which validates the model.
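The abstract does not give the functional forms behind the misorientation dependence. As a hedged illustration only, the sketch below pairs the classical Read-Shockley law for grain boundary energy with a Humphreys-type sigmoidal law for mobility; both forms, and the 15° high-angle cutoff, are textbook assumptions rather than choices taken from the paper:

```python
import numpy as np

THETA_M = np.deg2rad(15.0)  # high-angle cutoff (assumed, not from the paper)

def gb_energy(theta, gamma_m=1.0):
    """Read-Shockley: energy climbs steeply for low-angle boundaries and
    saturates at the high-angle value gamma_m."""
    t = np.minimum(theta, THETA_M) / THETA_M
    return np.where(t > 0, gamma_m * t * (1.0 - np.log(np.maximum(t, 1e-12))), 0.0)

def gb_mobility(theta, m_max=1.0, b=5.0, n=4.0):
    """Sigmoidal (Humphreys-type) mobility: low-angle boundaries are nearly
    immobile while high-angle boundaries approach m_max."""
    return m_max * (1.0 - np.exp(-b * (theta / THETA_M) ** n))

for deg in (2.0, 5.0, 15.0, 30.0):
    th = np.deg2rad(deg)
    print(f"{deg:4.0f} deg: energy {gb_energy(th):.2f}, mobility {gb_mobility(th):.4f}")
```

With these assumed laws the mobility contrast between low- and high-angle boundaries is far sharper than the energy contrast, which is at least qualitatively consistent with the abstract's finding that mobility dominates the ferrite morphology.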
102.
Mobile cloud computing is an emerging field that is rapidly gaining popularity across borders. Similarly, health informatics is considered an extremely important field. This work explores the collaboration between these two fields to solve the traditional problem of extracting Electrocardiogram signals from trace reports and then performing analysis. The developed system has two front ends, the first of which lets the user photograph the trace report. Once the photograph is taken, mobile computing is used to extract the signal. The extracted signal is then uploaded to the server, and further analysis is performed on it in the cloud. The second interface, intended for the physician, can then be used to download and view the trace from the cloud. The data is held securely using password-based authentication. The system presented here is one of the first attempts at delivering a total solution; after further upgrades, it should be possible to deploy the system in a commercial setting.
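The abstract does not specify the extraction algorithm. A minimal sketch of one common approach, assuming a grayscale photo in which the ECG trace is drawn in dark ink on a lighter background, recovers the signal column by column (the threshold value and the toy image are illustrative assumptions):

```python
import numpy as np

def extract_trace(gray: np.ndarray, ink_threshold: int = 100) -> np.ndarray:
    """Column-wise trace extraction: for each image column, average the row
    indices of dark (ink) pixels, flipping the axis so that larger values
    mean higher amplitude. Columns without ink yield NaN."""
    h, w = gray.shape
    signal = np.full(w, np.nan)
    for col in range(w):
        rows = np.where(gray[:, col] < ink_threshold)[0]
        if rows.size:
            signal[col] = h - rows.mean()
    return signal

# Toy image: a dark sine-shaped trace on a white background
h, w = 200, 400
img = np.full((h, w), 255, dtype=np.uint8)
rows = (100 + 60 * np.sin(np.linspace(0, 4 * np.pi, w))).astype(int)
img[rows, np.arange(w)] = 0
print(extract_trace(img)[:5])
```

A production system would additionally remove the grid lines and calibrate the pixel scale against the standard 1 mV reference pulse before analysis.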
103.
With the popularity of sensor-rich mobile devices, mobile crowdsensing (MCS) has emerged as an effective method for data collection and processing. However, the MCS platform usually needs workers' precise locations for optimal task execution, and it collects sensing data from workers, which raises severe concerns about privacy leakage. To protect workers' locations and sensing data from an untrusted MCS platform, this paper proposes a differentially private data aggregation method based on worker partition and location obfuscation (DP-DAWL). DP-DAWL first uses an improved K-means algorithm to divide workers into groups and assigns each group a privacy budget according to its size (the number of workers). Each worker's location is then obfuscated, and his or her sensing data is perturbed with Laplace noise, before being uploaded to the platform. In the data aggregation stage, DP-DAWL adopts an improved Kalman filter algorithm to filter out the added noise (both the noise added to the sensing data and the system noise arising in the sensing process). By optimally estimating the noisy aggregated sensing data, the platform can ultimately obtain better utility from the aggregated data while preserving workers' privacy. Extensive experiments on synthetic datasets demonstrate the effectiveness of the proposed method.
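As a hedged illustration of the perturbation step (the paper's exact obfuscation scheme and budget assignment rule are not reproduced here), the standard Laplace mechanism adds zero-mean noise with scale sensitivity/epsilon, and a larger group can be given a larger share of the total budget so its members individually add less noise:

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Standard Laplace mechanism: epsilon-DP for a query whose output can
    change by at most `sensitivity` between neighboring datasets."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)

# Hypothetical size-proportional budget split across three worker groups
total_eps = 1.0
group_sizes = np.array([10, 30, 60])
group_eps = total_eps * group_sizes / group_sizes.sum()

reading = 21.5  # one worker's raw sensed value (illustrative)
noisy = laplace_mechanism(reading, sensitivity=1.0, epsilon=group_eps[2])
print(group_eps, noisy)
```

The size-proportional split above is only one plausible rule; the abstract says the budget depends on group size but not how.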
104.
This paper examines the causal relationship between oil prices and the Gross Domestic Product (GDP) in the Kingdom of Saudi Arabia. The study uses a data set collected quarterly by the Saudi Arabian Monetary Authority over the period from 1974 to 2016. We examine how a change in the real crude oil price affects the GDP of the KSA. Using a new technique, we treat these data as continuous paths. More precisely, we analyze the causality between the two variables, oil prices and GDP, using their yearly curves observed over the four quarters of each year. We study causality in the sense of Granger, which requires stationarity of the data. Thus, in the first step, we test stationarity using a Monte Carlo test of functional time series stationarity. Our main goal is addressed in the second step, where we use the notion of functional causality to model the co-variability between these variables. We show that the two series are not integrated and that there is a one-directional causality between them. All statistical analyses were performed using the R software.
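The paper's analysis is functional and carried out in R; as a simplified, non-functional illustration of the same two-step logic (stationarity check, then Granger test) on ordinary quarterly series, one could proceed as below in Python with statsmodels. The synthetic series are placeholders, not the paper's data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(1)
# Placeholder quarterly series standing in for oil price and GDP changes
oil = rng.normal(size=172)                        # 43 years x 4 quarters
gdp = 0.5 * np.roll(oil, 1) + rng.normal(size=172)

# Step 1: stationarity (ADF test here; the paper uses a functional
# Monte Carlo stationarity test instead)
for name, series in [("oil", oil), ("gdp", gdp)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"ADF {name}: p = {pvalue:.3f}")        # small p -> no unit root

# Step 2: does oil Granger-cause gdp? (column order: effect, then cause)
results = grangercausalitytests(np.column_stack([gdp, oil]), maxlag=4)
```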
105.
As an unsupervised learning method, stochastic competitive learning is commonly used for community detection in social network analysis. Compared with traditional community detection algorithms, it has the advantage of enabling time-series community detection by simulating the community formation process. To improve accuracy and to remove the need to pre-set several of the parameters of stochastic competitive learning, the author improves the algorithm through particle position initialization, parameter optimization, and self-adaptation of the particles' domination ability. Experimental results show that each improvement increases the accuracy of the algorithm, and the F1 score of the improved algorithm is 9.07% higher than that of the original algorithm.
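The paper's specific improvements are not sketched here; for orientation only, a bare-bones version of stochastic competitive learning lets a few particles random-walk the graph and assigns each node to the particle that visits it most often. The toy graph and parameters are illustrative, and the full algorithm's domination dynamics are omitted:

```python
import random

def competitive_learning(adj, n_particles=2, steps=10_000, seed=0):
    """Bare-bones stochastic competitive learning: particles random-walk the
    graph, and each node joins the community of its most frequent visitor."""
    rng = random.Random(seed)
    nodes = list(adj)
    visits = {v: [0] * n_particles for v in nodes}
    pos = rng.sample(nodes, n_particles)          # random initial positions
    for _ in range(steps):
        for p in range(n_particles):
            pos[p] = rng.choice(adj[pos[p]])      # one random-walk step
            visits[pos[p]][p] += 1
    return {v: max(range(n_particles), key=lambda p: visits[v][p]) for v in nodes}

# Toy graph: two triangles joined by a single bridge edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(competitive_learning(adj))
```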
106.
In this paper, supervised Deep Neural Network (DNN) based signal detection is analyzed as a way to combat nonlinear distortions efficiently and improve error performance in clipping-based Orthogonal Frequency Division Multiplexing (OFDM) systems. One of the main disadvantages of OFDM is its high Peak-to-Average Power Ratio (PAPR). Clipping is a simple method for PAPR reduction; however, it introduces nonlinear distortion, which makes the transmitted symbols difficult to estimate even with Maximum Likelihood (ML) detection at the receiver. DNN-based online signal detection uses an offline-trained model in which all the weights and biases of the fully-connected layers are set, using training data sets, to overcome the nonlinear distortions. This paper therefore introduces the processes required for online signal detection and offline learning, and compares error performance with ML detection in clipping-based OFDM systems. Simulation results show that DNN-based signal detection achieves better error performance than conventional ML detection in a multi-path fading wireless channel. The performance improvement grows with system complexity, for example in large Multiple-Input Multiple-Output (MIMO) systems and at high clipping rates.
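To make the clipping step concrete, the small numpy sketch below generates one OFDM symbol, measures its PAPR, and applies amplitude clipping at a chosen clipping ratio. QPSK on 64 subcarriers and the clipping ratio of 1.2 are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of subcarriers (assumed)
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK
x = np.fft.ifft(X) * np.sqrt(N)           # time-domain OFDM symbol

def papr_db(sig):
    """Peak-to-average power ratio in dB."""
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(sig, cr):
    """Amplitude clipping: cap |sig| at cr times the RMS level, keep phase.
    This is the nonlinear distortion the DNN detector must undo."""
    a = cr * np.sqrt(np.mean(np.abs(sig) ** 2))
    mag = np.abs(sig)
    return np.where(mag > a, a * sig / mag, sig)

y = clip(x, cr=1.2)
print(f"PAPR before: {papr_db(x):.2f} dB, after: {papr_db(y):.2f} dB")
```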
107.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when strength and stress both have an odd or both have an even set size, and the other two when the strength has an odd set size and the stress an even one, and vice versa. The performance of the suggested estimators is compared with that of their competitors under simple random sampling via a simulation study. The simulation study reveals that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under the ranked set sampling and simple random sampling methods.
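As a quick numerical companion to R = P[Y < X], take the exponentiated Pareto CDF in its common form F(x) = [1 - (1 + x)^(-λ)]^α (an assumed parameterization, not necessarily the paper's); inverse-transform sampling then gives a Monte Carlo estimate of R, and when stress and strength share λ the closed form R = α_X / (α_X + α_Y) is available as a check:

```python
import numpy as np

def rvs_exp_pareto(alpha, lam, size, rng):
    """Inverse-transform sampling from the exponentiated Pareto CDF
    F(x) = [1 - (1 + x)^(-lam)]^alpha, x > 0."""
    u = rng.random(size)
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / lam) - 1.0

rng = np.random.default_rng(42)
a_x, a_y, lam = 2.0, 1.0, 1.5                 # illustrative parameters
x = rvs_exp_pareto(a_x, lam, 100_000, rng)    # strength
y = rvs_exp_pareto(a_y, lam, 100_000, rng)    # stress
print("Monte Carlo R:", np.mean(y < x))       # ~ a_x / (a_x + a_y) = 2/3
```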
108.
In this article, a new generalization of the inverse Lindley distribution is introduced, based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data. The new distribution includes the inverse Lindley and the Marshall-Olkin inverse Lindley as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are discussed and investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. The maximum likelihood method of estimation is considered under complete samples, Type-I censoring, and Type-II censoring. Maximum likelihood estimators as well as approximate confidence intervals for the population parameters are discussed. A comprehensive simulation study is conducted to assess the performance of the estimates in terms of their biases and mean square errors. The usefulness of the generalized Marshall-Olkin inverse Lindley model is demonstrated on two real data sets. The results show that the generalized Marshall-Olkin inverse Lindley model can produce better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
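For orientation, the plain Marshall-Olkin construction applied to the inverse Lindley baseline is easy to write down; the sketch below uses the standard formulas for both (assumed from the general literature, not copied from the paper), and the paper's generalized family adds a further shape parameter that is not shown here:

```python
import numpy as np

def inv_lindley_cdf(x, theta):
    """CDF of the inverse Lindley distribution:
    F(x) = (1 + theta / ((1 + theta) * x)) * exp(-theta / x), x > 0."""
    x = np.asarray(x, dtype=float)
    return (1.0 + theta / ((1.0 + theta) * x)) * np.exp(-theta / x)

def mo_inv_lindley_cdf(x, theta, alpha):
    """Marshall-Olkin transform of a baseline CDF F:
    G(x) = F(x) / (1 - (1 - alpha) * (1 - F(x))); alpha = 1 recovers F."""
    F = inv_lindley_cdf(x, theta)
    return F / (1.0 - (1.0 - alpha) * (1.0 - F))

xs = np.linspace(0.1, 10.0, 5)
print(mo_inv_lindley_cdf(xs, theta=1.0, alpha=2.0))
```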
109.
Owing to their outstanding ability to process large quantities of high-dimensional data, machine learning models have been used in many applications, such as pattern recognition, classification, spam filtering, data mining, and forecasting. As an outstanding machine learning algorithm, K-Nearest Neighbor (KNN) has been widely used in many situations, yet its use in selecting qualified applicants for funding is almost new. The major problem lies in how to determine the importance of attributes accurately. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two types: approved or not approved. FGDKNN is based on a gradient descent learning algorithm for updating the weights. It updates the weights by iteratively minimizing the error ratio, so that the importance of the attributes can be described better. We investigate the performance of FGDKNN on the Beijing Innofund. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
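The abstract does not spell out the loss function or the exact update rule; the sketch below is therefore a crude stand-in that keeps the core idea (a weighted distance whose feature weights are tuned to reduce classification error), using a coordinate-wise search instead of a true gradient and toy data that are entirely hypothetical:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, w, k=5):
    """kNN with a weighted Euclidean distance: feature j contributes
    w[j] * (x_j - q_j)^2, so larger weights mark more important attributes."""
    d = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))
    nn = np.argsort(d)[:k]
    return np.bincount(y_train[nn]).argmax()

def fit_weights(X, y, k=5, step=0.1, epochs=10):
    """Nudge each feature weight and keep the change only if leave-one-out
    error drops (a stand-in for the paper's gradient descent update)."""
    n, d = X.shape
    w = np.ones(d)

    def loo_error(w):
        idx = np.arange(n)
        wrong = sum(
            weighted_knn_predict(X[idx != i], y[idx != i], X[i], w, k) != y[i]
            for i in range(n))
        return wrong / n

    best = loo_error(w)
    for _ in range(epochs):
        for j in range(d):
            trial = w.copy()
            trial[j] += step
            err = loo_error(trial)
            if err < best:
                w, best = trial, err
    return w

# Toy data: feature 0 is informative, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.c_[rng.normal(rng.integers(0, 2, 200) * 2.0, 0.5), rng.normal(size=200)]
y = (X[:, 0] > 1.0).astype(int)
print(fit_weights(X, y))   # tuned weights; ideally feature 0 outweighs the noise
```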
110.
Host cardinality estimation is an important research field in network management and network security, and estimation based on an array of linear estimators is a common approach. Existing algorithms do not take the memory footprint into account when selecting the number of estimators used for each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on the accuracy of the algorithm. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimated results and the actual cardinalities. The deviation is affected by systematic factors, such as the random parameters inherent in the linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we regard the estimation deviation as a random variable and propose a sampling method, the linear estimator array step sampling algorithm (L2S), to reduce the influence of this random deviation. L2S improves the accuracy of the estimated cardinalities by estimating and removing the expected value of the random deviation. Cardinality estimation based on an estimator array is also computationally intensive and takes a long time to process high-speed network data in a serial environment. To solve this problem, a method is proposed to port the estimator-array cardinality estimation algorithm to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S reduces the absolute bias by more than 22% on average, with an extra processing time of less than 61 milliseconds on average.
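For context, the linear estimator that such arrays are built from is the classical linear-counting bitmap: hash each distinct peer into an m-bit map and estimate the cardinality from the count z of bits left at zero, via n̂ = m · ln(m / z). A minimal sketch follows; the MD5-based hash and the parameters are implementation choices, not taken from the paper:

```python
import hashlib
import math

def linear_counting(items, m=1024):
    """Linear counting: estimate the number of distinct items from the
    number z of bitmap positions never hit, n_hat = m * ln(m / z)."""
    bits = [False] * m
    for item in items:
        idx = int(hashlib.md5(str(item).encode()).hexdigest(), 16) % m
        bits[idx] = True
    z = bits.count(False)
    if z == 0:
        raise ValueError("bitmap saturated; increase m")
    return m * math.log(m / z)

# About 300 distinct peers, each seen several times in the stream
stream = [i % 300 for i in range(5000)]
print(linear_counting(stream))   # should land near 300
```

Each such bitmap carries its own random hash, which is one source of the systematic deviation that L2S is designed to estimate and remove.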